Explore, Select, Derive, and Recall: Augmenting LLM with Human-like Memory for Mobile Task Automation

Sunjae Lee, Junyoung Choi, Jungjae Lee, Hojun Choi, Steven Y. Ko, Sangeun Oh, Insik Shin

arXiv.org Artificial Intelligence

The advent of large language models (LLMs) has opened up new opportunities in the field of mobile task automation. Their superior language understanding and reasoning capabilities allow users to automate complex and repetitive tasks. However, due to the inherent unreliability and high operational cost of LLMs, their practical applicability is quite limited. To address these issues, this paper introduces MemoDroid, an innovative LLM-based mobile task automator enhanced with a unique app memory. MemoDroid emulates the cognitive process of humans interacting with a mobile app -- explore, select, derive, and recall. This approach allows for more precise and efficient learning of a task's procedure by breaking it down into smaller, modular components that can be reused, rearranged, and adapted for various objectives. We implement MemoDroid using online LLM services (GPT-3.5 and GPT-4) and evaluate its performance on 50 unique mobile tasks across 5 widely used mobile apps. The results indicate that MemoDroid can adapt learned tasks to varying contexts with 100% accuracy and reduces latency and cost by 69.22% and 77.36%, respectively, compared to a GPT-4 powered baseline.
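The learn-once, recall-cheaply idea behind the abstract can be sketched as a small cache of derived sub-task procedures. MemoDroid's actual interfaces are not shown here, so the class, method names, and the stub standing in for the LLM are illustrative assumptions only:

```python
# Hypothetical sketch of the explore/select/derive/recall loop: an LLM is
# queried once per screen to select actions (explore + select), the
# resulting action sequence is stored as a reusable procedure (derive),
# and later runs replay it with light adaptation (recall) instead of
# paying LLM latency/cost again. All names are illustrative.

class AppMemory:
    """Caches derived sub-task procedures keyed by (app, task)."""

    def __init__(self):
        self._procedures = {}  # (app, task) -> list of UI actions

    def learn(self, app, task, screens, select_action):
        """Explore the given screens, selecting one action per screen
        (e.g. via an LLM call), then store the derived sequence."""
        procedure = [select_action(screen, task) for screen in screens]
        self._procedures[(app, task)] = procedure
        return procedure

    def recall(self, app, task, adapt=lambda step: step):
        """Replay a learned procedure, adapting each step to the new
        context; returns None for unknown tasks (fall back to exploring)."""
        steps = self._procedures.get((app, task))
        if steps is None:
            return None
        return [adapt(step) for step in steps]


# Usage: the expensive LLM-backed selection runs once; later invocations
# of the same task hit the memory instead.
memory = AppMemory()
stub_llm = lambda screen, task: f"tap:{screen}"  # stands in for GPT-4
memory.learn("mail", "send_email", ["compose", "send"], stub_llm)
cached = memory.recall("mail", "send_email")  # no LLM calls needed here
```

The cache explains the reported latency/cost reduction: only novel tasks pay for full LLM-driven exploration.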


Google's DeepMind gives an AI human-like memory to solve tough problems

#artificialintelligence

With advances in modern data storage technology, chips the size of your fingernail can store an entire library's worth of knowledge, so one thing you might think computers do better than people is remember things. But according to Google Inc.'s DeepMind team, the artificial intelligence research group that developed AlphaGo, that is not entirely true. In a new paper published in the journal Nature, DeepMind has outlined a process whereby it trained a neural network to have human-like memory, giving it not only the ability to store data, but also to recall that information and use it to solve novel problems. "Neural networks excel at pattern recognition and quick, reactive decision-making, but we are only just beginning to build neural networks that can think slowly – that is, deliberate or reason using knowledge," the DeepMind team wrote in a recent blog post. "For example, how could a neural network store memories for facts like the connections in a transport network and then logically reason about its pieces of knowledge to answer questions?" DeepMind calls its new method differentiable neural computers, and the team demonstrated its capabilities using the London Underground, one of the largest public transit systems in the world.
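The "recall" step in a differentiable neural computer rests on content-based addressing: a query key is compared against every memory row by cosine similarity, a softmax turns those similarities into read weights, and the read-out is the weighted sum of rows. The sketch below shows just that read mechanism under simplifying assumptions (it omits the DNC's write heads and temporal link matrix, and all variable names are illustrative):

```python
# Minimal sketch of a DNC-style content-based memory read: rows similar
# to the query key receive high weight; the result blends memory rows
# rather than fetching a single exact address.

import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def content_read(memory, key, beta=10.0):
    """Read from `memory` (a list of row vectors) by similarity to `key`.
    `beta` is the key strength: higher values sharpen the softmax."""
    scores = [beta * cosine(row, key) for row in memory]
    peak = max(scores)                              # for numerical stability
    exps = [math.exp(s - peak) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]             # softmax read weights
    # Read-out = weighted sum of memory rows.
    return [sum(w * row[i] for w, row in zip(weights, memory))
            for i in range(len(key))]

# A query close to the first stored row mostly recalls that row.
memory = [[1.0, 0.0], [0.0, 1.0]]
readout = content_read(memory, [1.0, 0.1])
```

Because every step (similarity, softmax, weighted sum) is differentiable, the network can learn end-to-end what to store and when to recall it, which is what lets it answer route questions over stored transit-map facts.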